Relative Expected Instantaneous Loss Bounds
Authors

Jürgen Forster, Manfred K. Warmuth
Abstract

In the literature a number of relative loss bounds have been shown for on-line learning algorithms. Here the relative loss is the total loss of the on-line algorithm in all trials minus the total loss of the best comparator that is chosen off-line. However, for many applications instantaneous loss bounds are more interesting, where the learner first sees a batch of examples and then uses these e...
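As a sketch of the quantity being bounded (the notation here, a loss function L, predictions ŷ_t, and a comparator class 𝒰, is assumed for illustration and not taken from the abstract), the relative loss over T trials is

\[
\sum_{t=1}^{T} L(y_t, \hat{y}_t) \;-\; \min_{u \in \mathcal{U}} \sum_{t=1}^{T} L(y_t, u(x_t)),
\]

the on-line algorithm's total loss minus the total loss of the best comparator chosen off-line; an instantaneous bound instead controls the expected loss of the single prediction made after training on the batch.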
Similar Resources

Relative loss bounds for single neurons
We analyze and compare the well-known gradient descent algorithm and the more recent exponentiated gradient algorithm for training a single neuron with an arbitrary transfer function. Both algorithms are easily generalized to larger neural networks, and the generalization of gradient descent is the standard backpropagation algorithm. In this paper we prove worst-case loss bounds for both algori...
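A minimal sketch in Python of the two update rules being compared, assuming a sigmoid transfer function and squared loss (illustrative choices only; the paper handles arbitrary transfer functions, and none of these names come from it):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_gradient(w, x, y):
    """Gradient of the squared loss (sigmoid(w.x) - y)^2 / 2 for one neuron."""
    a = sigmoid(w @ x)
    return (a - y) * a * (1 - a) * x   # chain rule through the transfer function

def gd_step(w, x, y, eta=0.1):
    """Gradient descent: additive update in weight space."""
    return w - eta * loss_gradient(w, x, y)

def eg_step(w, x, y, eta=0.1):
    """Exponentiated gradient: multiplicative update on nonnegative weights,
    renormalized so the weights keep summing to one."""
    v = w * np.exp(-eta * loss_gradient(w, x, y))
    return v / v.sum()

w = np.full(4, 0.25)   # EG conventionally starts from the uniform weight vector
x, y = np.array([1.0, 0.5, -0.3, 0.2]), 1.0
print(gd_step(w, x, y), eg_step(w, x, y))

The qualitative difference is that GD moves additively in weight space while EG moves multiplicatively, which is what drives their different worst-case loss bounds.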
Relative Deviation Learning Bounds and Generalization with Unbounded Loss Functions
We present an extensive analysis of relative deviation bounds, including detailed proofs of two-sided inequalities and their implications. We also give detailed proofs of two-sided generalization bounds that hold in the general case of unbounded loss functions, under the assumption that a moment of the loss is bounded. These bounds are useful in the analysis of importance weighting and other lea...
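For the importance weighting application mentioned, here is a hedged Python illustration (the Gaussian source and target distributions are assumptions for the demo): the weight p(x)/q(x) is unbounded, so even a bounded base loss becomes an unbounded weighted loss, which is the regime such bounds address.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)      # samples from the source Q = N(0, 1)

def normal_pdf(x, mu, sigma):
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

weights = normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 1.0)   # p(x)/q(x) = exp(x - 1/2), unbounded
loss = x ** 2                                                 # a stand-in loss function

# Importance-weighted estimate of the loss under the target P = N(1, 1);
# the exact value is Var + mean^2 = 2.
print(np.mean(weights * loss))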
Modeling Expected Loss
This paper develops a methodology for modeling and estimating expected loss over arbitrary horizons. We jointly model the probability of default and the recovery rate given default. Different model specifications are estimated using an extensive default and recovery data set that contains the majority of defaults of AMEX-, NYSE-, and NASDAQ-listed companies between 1980 and 2004. We undertake extensiv...
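The decomposition underlying such a joint model (a standard credit-risk identity per unit of exposure, not the paper's specific specification) is

\[
\mathbb{E}[\text{loss}] \;=\; \Pr(\text{default}) \cdot \mathbb{E}\bigl[1 - R \mid \text{default}\bigr],
\]

where R is the recovery rate; the identity is exact by conditioning on the default event, and estimating the two factors jointly matters because default probabilities and recoveries can share common drivers.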
Relative Entropy Derivative Bounds
We show that the derivative of the relative entropy with respect to its parameters is lower and upper bounded. We characterize the conditions under which this derivative can reach zero. We use these results to explain when the minimum relative entropy and the maximum log likelihood approaches can be valid. We show that these approaches naturally activate in the presence of large data sets and t...
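The equivalence the abstract alludes to rests on a standard identity (a textbook fact, not the paper's derivative bounds): for the empirical distribution \hat{p} of samples x_1, ..., x_n and a model family q_\theta,

\[
\mathrm{KL}(\hat{p} \,\|\, q_\theta) \;=\; -H(\hat{p}) \;-\; \frac{1}{n} \sum_{i=1}^{n} \log q_\theta(x_i),
\]

so minimizing the relative entropy over \theta coincides with maximizing the log likelihood, since the entropy term does not depend on \theta.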
Journal

Journal title: Journal of Computer and System Sciences
Year: 2002
ISSN: 0022-0000
DOI: 10.1006/jcss.2001.1798